Multimedia Tools and Applications - Conventional steganography focuses on invisibility and undetectability, as the main concern is to make the algorithms immune to steganalysis. Zero-steganography...
Limited bandwidth resources lead to a number of challenges, especially for eHealth applications communicated over IP and wireless networks. These multimedia services include high-resolution videos and have very large file sizes that require a high level of compression to overcome this limitation. There is therefore an acute demand for the research community to provide an efficient multimedia framework that encodes medical videos with high quality, specifically under error-prone conditions. Both an affordable delivery framework and effective coding techniques are highly desirable for delivering high-quality eHealth video applications over heterogeneous networks and devices. In this paper, we propose and demonstrate a multimedia framework to support eHealth applications, with an improved coding scheme that uses the SVC scalable extension of MPEG-4 AVC/H.264. Simulation results show that the proposed scheme achieves a significant improvement in PSNR-Y gain and reduces the picture-quality degradation caused by artifacts and distortions compared to the existing scheme.
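The abstract reports gains in PSNR-Y, the peak signal-to-noise ratio computed on the luma (Y) plane. A minimal sketch of that metric, with synthetic luma data standing in for decoded frames (the arrays and noise model are illustrative assumptions, not the paper's test set):

```python
import numpy as np

def psnr_y(ref, dist, peak=255.0):
    """PSNR on the luma (Y) plane; higher means better fidelity."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical planes: infinite PSNR
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy luma planes: a reference frame and a mildly distorted copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
dist = np.clip(ref + rng.integers(-2, 3, size=ref.shape), 0, 255)

gain = psnr_y(ref, dist)
```

A per-pixel error of only a couple of gray levels yields a PSNR-Y in the mid-40 dB range, which is why fractions of a dB are meaningful in codec comparisons.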
This paper proposes an adaptive watermarking scheme for e-government document images. The adaptive scheme combines the discrete cosine transform (DCT) and the singular value decomposition (SVD) using luminance masking. As a core masking model of the human visual system (HVS), luminance masking is implemented to improve noise sensitivity. A genetic algorithm (GA) is subsequently employed to optimize the scaling factor of the masking. The scheme proposed in this study involves a number of steps: it begins by calculating the mask of the host image using luminance masking, and then transforms the mask of each area into the frequency domain. The watermark image is then embedded by modifying the singular values of the DCT-transformed host image with the singular values of the host image's mask coefficients and the control parameter of the DCT-transformed watermark image, using the GA. The use of both the singular values and the control parameter serves not only to improve the watermark's performance but also to avoid the false-positive problem. The watermark image is afterwards extracted from the distorted images. The experimental results show that the proposed scheme is more resistant to several types of attacks than previous schemes; its adaptive performance derives from the adaptive parameter of the luminance masking, which improves the robustness of the watermarked image against attacks.
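The core embedding step described above (perturbing the singular values of the DCT-transformed host with those of the watermark) can be sketched as follows. This omits the luminance masking and the GA-optimized scaling factor, using a fixed `alpha` as a stand-in, so it is an illustration of the DCT-SVD idea rather than the paper's full method:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, watermark, alpha=0.05):
    """Sketch of DCT-SVD embedding: perturb the host's singular
    values (in the DCT domain) with the watermark's singular values.
    `alpha` stands in for the GA-optimized scaling factor."""
    D = dctn(host.astype(np.float64), norm="ortho")
    U, S, Vt = np.linalg.svd(D)
    Sw = np.linalg.svd(dctn(watermark.astype(np.float64), norm="ortho"),
                       compute_uv=False)
    S_mod = S + alpha * Sw                 # embed by shifting singular values
    D_mod = U @ np.diag(S_mod) @ Vt        # rebuild the DCT coefficients
    return idctn(D_mod, norm="ortho")      # back to the pixel domain

rng = np.random.default_rng(1)
host = rng.integers(0, 256, size=(32, 32)).astype(float)
mark = rng.integers(0, 2, size=(32, 32)).astype(float) * 255
marked = embed(host, mark)
```

Extraction would invert these steps given the original singular values, which is also where the false-positive problem the abstract mentions arises in naive SVD-only schemes.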
In the design phase of business and IT system development, it is desirable to predict the properties of the system-to-be. A number of formalisms to assess qualities such as performance, reliability, and security have therefore been proposed. However, existing prediction systems do not allow the modeler to express uncertainty with respect to the design of the considered system. Yet in contemporary business, the high rate of change in the environment leads to uncertainties about present and future characteristics of the system so significant that ignoring them becomes problematic. In this paper, we propose a formalism, the Predictive, Probabilistic Architecture Modeling Framework (P2AMF), capable of advanced and probabilistically sound reasoning about business and IT architecture models given in the form of Unified Modeling Language (UML) class and object diagrams. The proposed formalism is based on the Object Constraint Language (OCL), to which P2AMF adds a probabilistic inference mechanism. The paper introduces P2AMF, describes its use for system-property prediction and assessment, and proposes an algorithm for probabilistic inference.
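The abstract does not specify P2AMF's inference algorithm, but the general idea of evaluating a constraint over an architecture model with uncertain structure and attributes can be illustrated by Monte Carlo sampling. The service/server scenario, the probabilities, and the availability constraint below are all invented for illustration:

```python
import random

# Hypothetical toy model: a service is available iff at least one of its
# (possibly absent) redundant servers exists and is up. The existence and
# uptime probabilities stand in for uncertain structure and attributes.
def sample_available(p_exists=0.9, p_up=0.95, n_servers=2, rng=random):
    return any(rng.random() < p_exists and rng.random() < p_up
               for _ in range(n_servers))

random.seed(42)
trials = 20_000
# Estimate P(available) by sampling concrete object models and
# evaluating the (deterministic) constraint on each sample.
estimate = sum(sample_available() for _ in range(trials)) / trials
# Analytic value for comparison: 1 - (1 - 0.9 * 0.95)**2 ≈ 0.979
```

Each sample plays the role of one concrete object diagram drawn from the uncertain class model; the constraint is ordinary Boolean logic once the sample is fixed.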
Software end-users need to sign licenses to seal an agreement with the product providers. Habitually, users agree to the license (i.e., its terms and conditions) without fully understanding the agreement. To address this issue, an ontological model is developed to formulate user requirements and license agreements formally. This paper introduces an ontological model that includes an abstract license ontology of common features found in different license agreements. The abstract license ontology is then extended to a few real-world license agreements. The resulting model can be used for different purposes, such as querying for licenses appropriate to a specific requirement or checking license terms and conditions against user requirements.
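The querying use case can be sketched with a drastically simplified stand-in for the ontology: plain feature dictionaries instead of OWL classes. The feature names and the license entries below are illustrative assumptions, not the paper's model:

```python
# Hypothetical flat encoding of license features; a real ontology would
# express these as classes and properties with reasoning support.
LICENSES = {
    "MIT":        {"commercial_use": True, "copyleft": False, "patent_grant": False},
    "GPL-3.0":    {"commercial_use": True, "copyleft": True,  "patent_grant": True},
    "Apache-2.0": {"commercial_use": True, "copyleft": False, "patent_grant": True},
}

def matching_licenses(requirements):
    """Return licenses whose terms satisfy every user requirement."""
    return sorted(name for name, terms in LICENSES.items()
                  if all(terms.get(k) == v for k, v in requirements.items()))

# Query: a permissive license with an explicit patent grant.
hits = matching_licenses({"copyleft": False, "patent_grant": True})
```

An ontology adds what this dictionary lookup cannot: subsumption (e.g., "permissive" implying several concrete terms) and consistency checking between requirements.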
Visual Cryptography (VC) has gained traction in recent years as a means of securing visual information in transmission networks. It enables visual data (e.g., handwritten notes, photos, printed text) to be encrypted in such a way that decryption can be performed by the human visual system; hence, no computational assistance is required to decrypt the secret images, which can be read with the naked eye. In this paper, a novel enhanced halftoning-based VC scheme is proposed that works for both binary and color images. A fake share is generated from a combination of random black and white pixels. The proposed algorithm consists of three stages: detection, encryption, and decryption. Halftoning, encryption, (2, 2) visual cryptography, and the novel idea of the fake share make the scheme more secure and robust. As a result, the authentic user recovers the original image, whereas anyone who enters a wrong password obtains the combination of the fake share with a real share. Both color and binary images can be processed with minimal capacity using the proposed scheme.
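The classical (2, 2) VC building block the scheme relies on can be sketched for a binary image: each secret pixel expands to a 2x2 subpixel block, identical in both shares for white pixels and complementary for black ones, so stacking the shares (a pixelwise OR of black subpixels) reveals the secret. This is the textbook construction only; the paper's halftoning, detection stage, and fake-share generation are not shown:

```python
import numpy as np

def vc_shares(secret, rng=None):
    """(2, 2) visual cryptography with 2x2 pixel expansion.
    Convention: 0 = white, 1 = black."""
    rng = rng or np.random.default_rng(0)
    patterns = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])  # two flattened 2x2 layouts
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]          # random layout per pixel
            q = p if secret[i, j] == 0 else 1 - p  # white: same; black: complement
            s1[2*i:2*i+2, 2*j:2*j+2] = p.reshape(2, 2)
            s2[2*i:2*i+2, 2*j:2*j+2] = q.reshape(2, 2)
    return s1, s2

secret = np.array([[0, 1], [1, 0]])
s1, s2 = vc_shares(secret)
overlay = s1 | s2  # stacking transparencies ~ OR of black subpixels
```

Each share block contains exactly two black subpixels regardless of the secret pixel, which is what makes a single share carry no information.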
Automatic key concept identification from text is a central challenge in information extraction, information retrieval, digital libraries, ontology learning, and text analysis. The main difficulty lies in the text data itself: noise, diversity, scale, context dependency, and word-sense ambiguity. To cope with this challenge, numerous supervised and unsupervised approaches have been devised. Existing topical clustering-based approaches to keyphrase extraction are domain dependent and overlook the semantic similarity between candidate features while extracting topical phrases. In this paper, a semantics-based unsupervised approach (KP-Rank) is proposed for keyphrase extraction. The proposed approach exploits Latent Semantic Analysis (LSA) and clustering techniques, and introduces a novel frequency-based algorithm for candidate ranking that considers locality-based sentence, paragraph, and section frequencies. To evaluate the performance of the proposed method, three benchmark datasets from different domains are used (Inspec, 500N-KPCrowd, and SemEval-2010). The experimental results show that, overall, KP-Rank achieves significant improvements over existing approaches on the selected performance measures.
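The locality-based frequency idea can be sketched as a score that counts, per candidate phrase, both the sentences and the paragraphs it occurs in. The weighting and the toy text are assumptions for illustration; KP-Rank's actual formula (and its LSA/clustering stages) are not reproduced here:

```python
from collections import Counter
import re

def rank_candidates(paragraphs, candidates):
    """Hypothetical locality-weighted frequency score: one point per
    sentence occurrence plus one per paragraph occurrence, loosely
    mirroring sentence/paragraph-level frequencies."""
    scores = Counter()
    for para in paragraphs:
        sentences = re.split(r"[.!?]+\s*", para.lower())
        seen_in_para = set()
        for sent in sentences:
            for cand in candidates:
                if cand in sent:
                    scores[cand] += 1          # sentence-level frequency
                    seen_in_para.add(cand)
        for cand in seen_in_para:
            scores[cand] += 1                  # paragraph-level bonus
    return [c for c, _ in scores.most_common()]

paras = [
    "Keyphrase extraction finds key concepts. Keyphrase extraction is unsupervised here.",
    "Clustering groups candidates. Latent semantic analysis helps clustering.",
]
ranking = rank_candidates(
    paras, ["keyphrase extraction", "clustering", "latent semantic analysis"])
```

Counting distinct localities rather than raw term frequency is what lets phrases spread across a document outrank phrases repeated within one passage.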